Who can I trust? Investigating Trust Between Users and Agents in A Multi-Agent Portfolio Management System
Authors
Abstract
Trust between agents has been explored extensively in the literature; trust between agents and users, however, has largely been left untouched. In this paper, we report preliminary results on how reinforcement-learning agents (i.e. broker agents, or brokers) win the trust of their clients in an artificial market, I-TRUST. The goal of these broker agents is not only to maximize total revenue subject to their clients' risk preferences, as most other agents do [LeBaron et al. 1997; Parkes and Huberman 2001; Schroeder et al. 2000], but also to maximize the trust they receive from their clients. Trust is introduced into I-TRUST as a relationship between clients and their software broker agents, quantified by the amount of money clients are willing to give these agents to invest on their behalf. To achieve this, broker agents first elicit user models, both explicitly through questionnaires and implicitly through three games. Based on these initial user models, a broker agent then learns to invest and updates the model when necessary. In addition to each broker agent's individual learning of how to maximize the 'reward' it may receive from its client, we have incorporated cooperative reinforcement learning among the agents to adjust their portfolio-selection strategies; the system is implemented in FIPA-OS. A large-scale experiment is planned as future research.
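To make the reward structure concrete, the sketch below shows one plausible way a broker agent's learning signal could blend portfolio return with the change in client trust (the funds the client is willing to entrust to the agent), using plain tabular Q-learning in Python. The class, method names, and the specific weighting are hypothetical illustrations, not the authors' FIPA-OS implementation.

import random
from collections import defaultdict

# Hypothetical sketch (not the authors' implementation): a broker agent that
# learns with tabular Q-learning, where the reward blends portfolio return with
# the change in client trust, i.e. the money the client entrusts to the agent.

class BrokerAgent:
    def __init__(self, actions, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(float)   # Q-values keyed by (state, action)
        self.actions = actions        # candidate portfolio-selection strategies
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def choose(self, state):
        # Epsilon-greedy choice among candidate portfolio strategies.
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def compute_reward(self, portfolio_return, trust_before, trust_after,
                       risk_weight=0.5):
        # Revenue term (scaled by the client's risk preference) plus a trust
        # term: the increase in funds the client is willing to hand over.
        return risk_weight * portfolio_return + (trust_after - trust_before)

    def update(self, state, action, reward, next_state):
        # Standard one-step Q-learning backup.
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (target - self.q[(state, action)])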
Similar References
Investigating Trust between Users and Agents in A Multi Agent Portfolio Management System: a Preliminary Report
We have witnessed considerable research investigating trust between agents in multi-agent systems. However, the issue of trust between agents and users has rarely been reported in the literature. In this paper, we describe our experiences with ITRUST, a multi-agent artificial market system whose software broker agents can learn to build a relatively long-term trust relationship with their clients...
I-TRUST: Investigating Trust between Users and Agents in a Multi-Agent Portfolio Management System
Trust between agents has been explored extensively in the literature. However, trust between agents and users has largely been left untouched. In this paper, we report our preliminary results of how reinforcement-learning agents (i.e. broker agents, or brokers) win the trust of their clients in an artificial market, I-TRUST. The goals of these broker agents are not only to maximize the total reve...
Investigating the Effect of Social Business Characteristics on Trust and Willingness to Partnership
Objective: Social business is a sub-category of electronic business that seeks social, innovative, and cooperative approaches within online markets and uses social media to attract the partnership and cooperation of network users in support of online purchasing and services. Trust is considered an effective factor in successful social business. Because of the growing populari...
Trust Representation and Aggregation in a Distributed Agent System
This paper considers a distributed system of software agents that cooperate in helping their users find services provided by different agents. The agents need to ensure that the service providers they select are trustworthy. Because the agents are autonomous and there is no central trusted authority, the agents help each other determine the trustworthiness of the service providers they are i...
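As an illustrative aside (not the scheme from the paper excerpted above), a common way for a decentralized agent to combine its own experience with peer referrals is to weight each referral by the trust placed in the referring agent. The minimal Python sketch below uses hypothetical names and a hypothetical blending parameter.

# Illustrative only: aggregate trust in a service provider from direct
# experience plus peer referrals, each referral weighted by the trust the
# querying agent places in that referrer.

def aggregate_trust(direct_trust, referrals, self_weight=0.5):
    """direct_trust: the agent's own rating of the provider in [0, 1], or None.
    referrals: list of (trust_in_referrer, referrer_rating) pairs, each in [0, 1].
    Returns a combined trust estimate in [0, 1]."""
    # Weight each peer's rating by the trust placed in that peer.
    total_weight = sum(w for w, _ in referrals)
    referred = (sum(w * r for w, r in referrals) / total_weight) if total_weight else None

    if direct_trust is None and referred is None:
        return 0.5               # no evidence: fall back to a neutral prior
    if direct_trust is None:
        return referred          # only second-hand evidence available
    if referred is None:
        return direct_trust      # only first-hand evidence available
    # Blend first-hand and second-hand evidence.
    return self_weight * direct_trust + (1 - self_weight) * referred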
Responding to Sneaky Agents in Multi-agent Domains
This paper extends the concept of trust modeling within a multi-agent environment. Trust modeling often focuses on identifying the appropriate trust level for the other agents in the environment and then using these levels to determine how to interact with each agent. However, this type of modeling does not account for sneaky agents who are willing to cooperate when the stakes are low and take ...